We will take data from different Open Field trials and quantify movement. We will also develop a tracking method based on optic flow to obtain the xy positions of the mice, and compare the two methods.
Observations: 95,604
Variables: 5
$ filename <chr> "Trial1_opt_flow.csv", "Trial1_opt_flow.csv", "Trial1_o…
$ datetime <dttm> 2020-04-27 11:28:30, 2020-04-27 11:28:30, 2020-04-27 1…
$ movement <dbl> 15867.8510, 15562.1410, 15902.6850, 16263.2740, 16554.3…
$ x <dbl> 200, 196, 188, 178, 170, 164, 158, 152, 146, 140, 136, …
$ y <dbl> 344, 354, 362, 368, 376, 388, 398, 406, 408, 416, 422, …
This step also involves interpolation in case there are missing values in the data.
[1] "Data was interpolated and no more NAs are present"
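As a sketch, the interpolation step could look like the following, assuming a hypothetical data frame `df` with the `x` and `y` columns shown above (base R `approx`, linear interpolation; `rule = 2` carries edge values outward so no NAs remain at the ends):

```r
# Linearly interpolate NA gaps in a numeric vector (hypothetical helper).
interp_na <- function(v) {
  idx <- seq_along(v)
  ok  <- !is.na(v)
  approx(idx[ok], v[ok], xout = idx, rule = 2)$y
}

# Toy track with missing positions.
df <- data.frame(x = c(200, NA, 188, NA, 170),
                 y = c(344, 354, NA, 368, 376))
df$x <- interp_na(df$x)
df$y <- interp_na(df$y)
anyNA(df)  # FALSE: no more NAs are present
```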
We visually inspect the smoothing/interpolation process.
Using the interpolated xy positions from optic flow, we can compare them to the HSV method.
Ideally, we want something that can run in real time (between 20 and 30 frames per second). HSV methods can run much faster but are not accurate in a homecage environment.
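For reference, the per-frame time budget implied by those frame rates:

```r
# Time budget per frame at the target frame rates (ms).
1000 / c(20, 30)  # ~50 ms and ~33 ms per frame
```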
We calculate the difference between techniques in point-to-point estimation by computing the Euclidean distance between the two measurements.
\(d(t) = \sqrt{(x(t)_{flow}-x(t)_{HSV})^2 + (y(t)_{flow}-y(t)_{HSV})^2}\)
where
\(X(t)_{flow} = (x(t), y(t))\)
Ideally, we expect \(d(t)\) to be normally distributed with mean 0 and variance \(\sigma^2\):
\(d(t) \sim \mathcal{N}(0, \sigma^2)\)
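A minimal sketch of this computation, assuming hypothetical per-frame vectors `x_flow`/`y_flow` and `x_hsv`/`y_hsv` aligned frame by frame:

```r
# Toy aligned tracks from the two methods (hypothetical values).
x_flow <- c(200, 196, 188); y_flow <- c(344, 354, 362)
x_hsv  <- c(203, 200, 188); y_hsv  <- c(340, 354, 359)

# Point-to-point Euclidean distance d(t) between the two estimates.
d <- sqrt((x_flow - x_hsv)^2 + (y_flow - y_hsv)^2)
d  # 5 4 3
```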
Visually,
Here is an example with the same points, where the segment between points is the \(d(t)\) that we want to estimate.
We see quite a lot of error, and the error depends on the trial.
The distribution is bimodal and heavily skewed (erroneous detection jumps are probably responsible for the most extreme values).
Most importantly, 97.13% of the data is below 50 px (~5 cm).
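The threshold check itself is a one-liner; with hypothetical `d` values in px:

```r
d <- c(5, 4, 120, 30, 48)  # hypothetical d(t) values in px
mean(d < 50)               # fraction of frames below 50 px (~5 cm)
```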
We have three measures of pixel-to-pixel movement.
We start with optic-flow-derived movement vs HSV and see whether we can get a good correlation.
The correlation between movement and optic flow is quite good.
We can further check using cumulative distance in the px-to-px analysis.
px_dist from flow vs px_dist_hsv
Another way to visualize these correlations is to plot the \(r\) value for each trial vs each method.
Or using a correlation matrix displaying the median correlation for each pair.
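A sketch of the per-trial correlation, assuming a hypothetical data frame with `trial`, `px_dist_flow`, and `px_dist_hsv` columns:

```r
# Toy per-trial cumulative distances from the two methods (hypothetical).
df <- data.frame(
  trial        = rep(c("Trial1", "Trial2"), each = 4),
  px_dist_flow = c(1, 2, 3, 4, 2, 4, 6, 8),
  px_dist_hsv  = c(1.1, 2.2, 2.9, 4.1, 2.1, 3.9, 6.2, 7.8)
)

# Pearson r between methods within each trial.
r_by_trial <- sapply(split(df, df$trial),
                     function(g) cor(g$px_dist_flow, g$px_dist_hsv))
```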
| CPU | cores | RAM |
|---|---|---|
| Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz | 4 | 16.67 GB |
R version 3.6.3 (2020-02-29)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.4 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/atlas/libblas.so.3.10.3
LAPACK: /usr/lib/x86_64-linux-gnu/atlas/liblapack.so.3.10.3
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] DiagrammeR_1.0.1 RColorBrewer_1.1-2 cowplot_1.0.0
[4] forcats_0.4.0 stringr_1.4.0 dplyr_0.8.3
[7] purrr_0.3.2 readr_1.3.1 tidyr_1.0.0
[10] tibble_2.1.3 ggplot2_3.3.0 tidyverse_1.2.1
loaded via a namespace (and not attached):
[1] nlme_3.1-144 lubridate_1.7.4 doParallel_1.0.15
[4] progress_1.2.2 httr_1.4.1 tools_3.6.3
[7] backports_1.1.4 utf8_1.1.4 R6_2.4.0
[10] vipor_0.4.5 mgcv_1.8-31 colorspace_1.4-1
[13] withr_2.1.2 tidyselect_0.2.5 gridExtra_2.3
[16] prettyunits_1.0.2 compiler_3.6.3 cli_1.1.0
[19] rvest_0.3.4 xml2_1.2.2 influenceR_0.1.0
[22] labeling_0.3 scales_1.1.0.9000 digest_0.6.20
[25] rmarkdown_1.15 benchmarkmeData_1.0.3 pkgconfig_2.0.2
[28] htmltools_0.3.6 highr_0.8 htmlwidgets_1.3
[31] rlang_0.4.4 readxl_1.3.1 rstudioapi_0.10
[34] visNetwork_2.0.8 farver_2.0.3 generics_0.0.2
[37] zoo_1.8-6 jsonlite_1.6 rgexf_0.15.3
[40] magrittr_1.5 Matrix_1.2-18 Rcpp_1.0.2
[43] ggbeeswarm_0.6.0 munsell_0.5.0 fansi_0.4.0
[46] viridis_0.5.1 lifecycle_0.1.0 stringi_1.4.3
[49] yaml_2.2.0 MASS_7.3-51.5 plyr_1.8.4
[52] grid_3.6.3 parallel_3.6.3 crayon_1.3.4
[55] lattice_0.20-40 haven_2.1.1 splines_3.6.3
[58] hms_0.5.0 magick_2.2 zeallot_0.1.0
[61] knitr_1.24 pillar_1.4.2 igraph_1.2.4.1
[64] codetools_0.2-16 XML_3.98-1.20 glue_1.3.1
[67] evaluate_0.14 gganimate_1.0.5 downloader_0.4
[70] modelr_0.1.5 vctrs_0.2.0 tweenr_1.0.1
[73] foreach_1.5.0 cellranger_1.1.0 gtable_0.3.0
[76] benchmarkme_1.0.3 assertthat_0.2.1 xfun_0.9
[79] broom_0.5.2 viridisLite_0.3.0 iterators_1.0.12
[82] beeswarm_0.2.3 Rook_1.1-1 ellipsis_0.2.0.1
[85] brew_1.0-6